Statistical Learnability of Generalized Additive Models based on Total Variation Regularization
Author
Abstract
A GAM predictor takes the form f(x) = Σ_{j=1}^p f_j(x_j), where x ∈ R^p denotes a sample and x_j ∈ R denotes the j-th explanatory variable for each j ∈ [1, p] := {j ∈ N | 1 ≤ j ≤ p}. This model was first proposed by Hastie and Tibshirani (1987) and is known as a generalized additive model (GAM). In this paper, we call each f_j(·) a weight function and f(·) a GAM predictor. This class not only includes linear predictors but also captures nonlinear relationships between explanatory variables and the targeted values. Although complex interactions or dependencies among explanatory variables are not expressed, GAM predictors are expected to exhibit higher predictive performance than simple linear models when properly learned from a sufficiently large amount of data. There has already been substantial work on data mining and statistics using GAMs (Guisan et al., 2002; Wood, 2006). We first introduce the total variation (TV) of a function as a measure of the complexity of functions in L^1_c(R), where L^1_c(R) denotes the space of functions with compact support in the L^1-space on R. Secondly, we introduce the sum of the TVs of all weight functions, Σ_{j=1}^p TV(f_j), as a natural measure of complexity for GAM predictors.
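To make these two quantities concrete, the following minimal Python sketch (not from the paper; the class name PiecewiseConstantGAM, the helper tv_penalty, and the piecewise-constant representation of the weight functions are assumptions for illustration) evaluates a GAM predictor f(x) = Σ_j f_j(x_j) and its complexity Σ_j TV(f_j) when each weight function is a step function on a per-feature grid; for such step functions the total variation reduces to the sum of absolute jumps between adjacent bins.

```python
import numpy as np

def tv_penalty(values):
    """Total variation of a piecewise-constant function, given its
    values on consecutive bins: the sum of absolute jumps."""
    values = np.asarray(values, dtype=float)
    return float(np.sum(np.abs(np.diff(values))))

class PiecewiseConstantGAM:
    """Illustrative GAM predictor f(x) = sum_j f_j(x_j) with
    piecewise-constant weight functions f_j (an assumed representation)."""

    def __init__(self, knots, values):
        # knots[j]: increasing breakpoints of feature j (length m_j + 1)
        # values[j]: value of f_j on each of the m_j bins
        self.knots = [np.asarray(k, dtype=float) for k in knots]
        self.values = [np.asarray(v, dtype=float) for v in values]

    def predict(self, X):
        X = np.asarray(X, dtype=float)
        pred = np.zeros(X.shape[0])
        for j, (knots_j, vals_j) in enumerate(zip(self.knots, self.values)):
            # locate the bin containing x_j and add the value of f_j there
            idx = np.clip(np.searchsorted(knots_j, X[:, j], side="right") - 1,
                          0, len(vals_j) - 1)
            pred += vals_j[idx]
        return pred

    def complexity(self):
        # sum of total variations over all weight functions
        return sum(tv_penalty(v) for v in self.values)

# Example with two features, each modeled by a step function
gam = PiecewiseConstantGAM(
    knots=[[0.0, 0.5, 1.0], [0.0, 0.3, 0.7, 1.0]],
    values=[[1.0, -1.0], [0.0, 2.0, 1.0]],
)
X = np.array([[0.2, 0.8], [0.9, 0.1]])
print(gam.predict(X))    # per-sample sums of the two weight functions
print(gam.complexity())  # TV(f_1) + TV(f_2) = 2.0 + 3.0 = 5.0
```

In a learning procedure, complexity() would play the role of the regularizer added to the empirical loss; the sketch only shows how the predictor and the TV-based complexity measure are evaluated.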
Similar resources
Boosting Algorithms: Regularization, Prediction and Model Fitting
We present a statistical perspective on boosting. Special emphasis is given to estimating potentially complex parametric or nonparametric models, including generalized linear and additive models as well as regression models for survival analysis. Concepts of degrees of freedom and corresponding Akaike or Bayesian information criteria, particularly useful for regularization and variable selectio...
A distributed algorithm for fitting generalized additive models
Generalized additive models are an effective regression tool, popular in the statistics literature, that provides an automatic extension of traditional linear models to nonlinear systems. We present a distributed algorithm for fitting generalized additive models, based on the alternating direction method of multipliers (ADMM). In our algorithm the component functions of the model are fit indepe...
spikeSlabGAM: Bayesian Variable Selection, Model Choice and Regularization for Generalized Additive Mixed Models in R
The R package spikeSlabGAM implements Bayesian variable selection, model choice, and regularized estimation in (geo-)additive mixed models for Gaussian, binomial, and Poisson responses. Its purpose is to (1) choose an appropriate subset of potential covariates and their interactions, (2) to determine whether linear or more flexible functional forms are required to model the effects of the respe...
Bootstrapping with Noise: An Effective Regularization Technique
Bootstrap samples with noise are shown to be an effective smoothness and capacity control technique for training feed-forward networks and for other statistical methods such as generalized additive models. It is shown that the noisy bootstrap performs best in conjunction with weight decay regularization and ensemble averaging. The two-spiral problem, a highly nonlinear noise-free dataset, is used to d...
Spatially dependent regularization parameter selection in total generalized variation models for image restoration
The automated spatially dependent regularization parameter selection framework of [9] for multi-scale image restoration is applied to total generalized variation (TGV) of order two. Well-posedness of the underlying continuous models is discussed and an algorithm for the numerical solution is developed. Experiments confirm that due to the spatially adapted regularization parameter the method all...
Journal: CoRR
Volume: abs/1802.03001
Pages: -
Publication year: 2018